Some Global Environmental Issues of Public Concern
ABSTRACT
In this talk I will pick three key environmental issues that dominate the risk or impact, or dominate the public perception thereof: (a) What is the effect upon health of particulate air pollution at today's levels? Experts increasingly believe that fine particulates kill 70,000 people a year in the USA, but this has not yet been officially admitted by any government. (b) What is the effect of increased carbon dioxide emissions, from burning fossil fuels, on global climate change? There was intense government attention at Rio de Janeiro in 1992, in Berlin in 1995 and in Kyoto in 1997. The estimates of the Intergovernmental Panel on Climate Change (IPCC) have changed but remain disturbingly high. What should the world do about it all? (c) I will briefly discuss another global environmental effect: the pollution of water supplies, particularly in the Bengal Basin (West Bengal and Bangladesh), by arsenic, and the implications this has for the world. I conclude with an argument for the future. Contrary to the defeatist (Luddite) attitude toward technology that many non-technologists have, I argue that the future of the human race demands an active intervention by people, to prevent such catastrophes as the Black Death (which probably reduced the population by one third) and all-out nuclear war.

GENERAL INTRODUCTION

I am giving this keynote talk somewhat under false pretenses. I am NOT an expert on climate change. Although I was, for nearly 5 years, Director of the NE Center of the National Institute for Global Environmental Change (NIGEC), that duty served mostly to tell me how little I know. I have, however, been aware of the problem for 55 years and have written about a number of global issues. I will start by talking about an issue I think I understand, the problems of air pollution, and end with a global problem no one understands: arsenic pollution.
Then I will make my appeal: if man wants to live in the world with the population we now have, we must not merely live with the environment; we must learn how to manage crucial parts of it, and manage it wisely.

THE ISSUES OF LOW DOSE LINEARITY

Underlying most environmental issues is the question of whether or not there is linearity of effect with dose at low doses. I argue that it is probably true both for radiation and for many other polluting agents. There is no doubt that high doses of radiation (500 rems) have led to acute radiation sickness and death, and doses somewhat less than these (100 rems) have led to cancer. But there is much more doubt whether the low doses that arise from normal operation of nuclear power plants, and even the doses to most of the people exposed in accident conditions, lead to any health problems. Since 1928 it has been conventional, as prudent public policy, to assume that there is a linear relationship between radiation dose and response, so that even small doses, if widely spread over a population, can produce an appreciable response. This assumption was originally suggested (implicitly) by Crowther (1924) and for many years was only made by those concerned about radiation exposure. This led to an (incorrect) feeling that anything involving radiation is uniquely dangerous. It is now realized that this low dose linearity assumption is probably equally valid (or invalid) for other carcinogens, and even for other medical outcomes. This is an inherent consequence of the multistage theory of carcinogenesis, particularly in the form developed by Armitage and Doll (1954, 1957). Indeed the idea is more general: if the medical outcome is indistinguishable from one caused by natural processes, and the agent acts in the same way as the natural processes at any stage in the carcinogenic process, then almost any biological dose-response relationship will be linear at low doses (Crump et al. 1976).
Recently it has been realized that linearity may apply to many other situations, such as particulate air pollution (Crawford and Wilson 1996). This is crucial as we reevaluate data on air pollution. In 1925, for example, the cry was “the solution to pollution is dilution”, thereby bringing all concentrations below an assumed threshold. This certainly reduced local pollution, but increased pollution at a distance and made a local problem into a global problem, albeit one of smaller individual concern.

THE EFFECTS OF AIR POLLUTION

The effects of fossil fuel use on public health are primarily those of air pollution: the liberation of gases from the power plant as a result of fossil fuel burning. Burning of fossil fuels results in emission of gases from incomplete combustion as well as gases from impurities. There is a marked difference between fossil fuel and nuclear plants in these respects. The emissions from fossil fuel plants occur in ordinary operation and are continuous, whereas the only important emissions from nuclear plants occur in accident situations. The pollution from coal burning was noticed in the 15th century in England. In the 17th century Evelyn wrote a tract on the subject (Evelyn 1661). There is no doubt that air pollution, and in particular the burning of coal, HAS killed members of the public outside the power plant when air pollution levels were high. After a large fraction of people got sick and died in bad fogs in the Meuse Valley, Belgium, and in Donora, Pennsylvania, people paid attention. Immediately after a London fog in December 1952 there were 4500 “excess deaths” (Beaver 1954) (Figure 1). Deaths were due to a variety of causes, all of which can also occur naturally. The British government took the immediate action of banning the burning of soft coal in the cities. The plentiful supply of oil from the Middle East enabled the UK economy to do without this burning.
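The low-dose-linearity argument that runs through this discussion can be made concrete with a small sketch. The numbers below are illustrative assumptions, not measured risk coefficients; the point is only that, under a linear no-threshold assumption anchored at a high-dose observation, a tiny per-person dose spread over a large population still implies a calculable number of cases.

```python
# Illustrative linear no-threshold (LNT) extrapolation.
# The anchor dose and excess risk are assumed values for illustration only.

def lnt_risk(dose_rem, anchor_dose=100.0, anchor_excess_risk=0.08):
    """Excess lifetime risk under a linear no-threshold assumption."""
    slope = anchor_excess_risk / anchor_dose  # risk per rem
    return slope * dose_rem

# A small dose, widely spread, still yields a nonzero population burden:
per_person = lnt_risk(0.1)            # 0.1 rem to each person
population_cases = per_person * 1e6   # spread over a million people (~80 cases)
```

Whether such an extrapolation is valid is, of course, exactly the scientific question at issue; the sketch only shows why the policy consequences of assuming linearity are so large.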
Figure 1. Daily mean pollution concentrations and daily numbers of deaths during the London Fog Episode of 1952. (Source: Beaver, 1954.)

At that time the UK government’s scientific advisors, believing in a threshold, argued that (i) if pollutant levels could be reduced 5-fold the effect would vanish, (ii) the pollution only affected the aged and sick who would die in a few days or weeks anyway, and (iii) the major problem was sulphur dioxide from burning of the sulphur impurities in the coal. It is my contention that all three scientific statements (i), (ii) and (iii) were wrong. There has been, and still is, a major controversy about (ii) in particular, and about how to extrapolate these known hazards to the lower levels of today. During the 1970s I, and many others, thought that present day air pollution in the eastern US or northern Europe affects 1% of the people exposed (Wilson et al. 1982). But in an influential 1979 review (Holland et al. 1979) several prominent British scientists systematically discounted the evidence presented in the studies carried out so far, which merely compared average death rates with averaged outdoor pollution and are classified by epidemiologists as "ecological" studies. They concluded that the health effects of particulate pollution at low concentrations could not be "disentangled" from the health effects of other factors. A major (unstated) reason for dismissal of the findings was the (correct) observation that by 1980 visible air pollution in many major cities (London, Glasgow, Pittsburgh, Moscow) had already been much reduced from the black periods of the first half of the century. No government took any further action. However there is now much stronger scientific justification. Exposure of animals to combustion products begins to elucidate mechanisms (Amdur 1989). Systematic correlation studies of death rates with air pollution in major cities show consistent results.
Moreover two epidemiological cohort studies avoid many of the criticisms that applied to the ecological studies. The first was the "Six Cities study" (Dockery et al. 1993), involving a 14-16 year prospective follow-up of 8111 adults living in 6 U.S. cities: Watertown, Massachusetts; Harriman, Tennessee; St. Louis, Missouri; Steubenville, Ohio; Portage, Wisconsin; and Topeka, Kansas, selected to be representative of the range of particulate air pollution. Measurements were made of total suspended particulates (TSP), PM10, PM2.5, SO4, H+, SO2, NO2, and O3 levels. Although TSP concentrations dropped over the study period, fine particulate and sulfate pollution concentrations were relatively constant. The most polluted city was Steubenville; the least polluted cities were Topeka and Portage. Differences in the probability of survival among the cities were statistically significant (P = 0.001). Individual health outcomes were compared with average exterior concentrations. Mortality risks were most strongly associated with cigarette smoking, but after controlling for individual differences in age, sex, cigarette smoking, body mass index, education, and occupational exposure, differences in relative mortality risks across the six cities were strongly associated with air pollution levels in those cities. These associations, shown in figure 2, are stronger for respirable particles and sulfates, as measured by PM10, PM2.5, and SO4, than for TSP, SO2, acidity (H+), or ozone. The association between mortality risk and fine particulate air pollution was consistent and nearly linear, with no apparent "no effects" threshold level. The adjusted total mortality-rate ratio for the most polluted of the cities compared with the least polluted was 1.26 (95% confidence interval 1.08-1.47). Fine particulate pollution was associated with cardiopulmonary mortality and lung cancer mortality, but not with the mortality due to other causes analyzed as a group.
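The "nearly linear, no apparent threshold" finding can be illustrated with a small interpolation sketch. The PM2.5 concentrations below are approximate values I am assuming for the least and most polluted cities, and the straight-line form is itself the assumption under test; this is illustration, not the study's fitted model.

```python
# Hedged illustration of a linear concentration-response consistent with the
# Six Cities result: adjusted rate ratio 1.26 between the most and least
# polluted cities. The PM2.5 endpoints are assumed approximate values.

pm_least, pm_most = 11.0, 29.6   # ug/m^3, assumed approximate study range
rr_most = 1.26                   # adjusted mortality rate ratio vs least polluted

# Under linearity with no threshold, excess risk per unit concentration:
slope = (rr_most - 1.0) / (pm_most - pm_least)   # ~0.014 per ug/m^3

def rate_ratio(pm):
    """Interpolated rate ratio relative to the least polluted city."""
    return 1.0 + slope * (pm - pm_least)
```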
The results are substantially larger than those in one of the best earlier "ecological" studies. This difference suggests that the cohort study is able to estimate the effects of air pollution with more accuracy. Ecological (population) studies average the observed effects over the affected population; lower rates in these studies, compared to cohort studies, would therefore be expected. Although the statistical associations were with area outdoor pollution measurements, subsidiary studies show that gases and fine particles easily penetrate indoors (in contrast to heavier and larger particles). Similar results were observed in a second, larger prospective cohort study (Pope et al. 1995). Approximately 500,000 adults drawn from the American Cancer Society (ACS) Cancer Prevention Study II (CPS-II), who lived in 151 different U.S. metropolitan areas, were followed prospectively from 1982 through 1989. Individual risk-factor data and 8 year vital status data were collected. Ambient concentrations of sulfates and fine particles, which are relatively consistent indoors and out, were used as indices of exposure to combustion source ambient particulate air pollution, which is considered by many to be a likely agent. Sulfate and fine particulate air pollution were associated with a difference of approximately 15 to 17 percent between total mortality risks in the most polluted cities and those in the least polluted cities.

Figure 2. Estimated adjusted mortality rate-ratios from the Six-Cities Study plotted against noninhalable particles (TSP-IP), the coarse fraction of inhalable particles (IP-FP), fine particles (PM2.5), and sulfate particles. (Source: Pope and Dockery, 1996.)

These, and other, data are summarized in Wilson and Spengler (1996) and have been replicated and reviewed by Krewski et al.
(2000). A simple application of these results to the continental USA suggests that 70,000 persons die early (have their lives shortened) each year because of air pollution. I assume that 40% of these, or about 30,000 deaths, arise from the existing coal fired electricity generation (about 200 GW-yr), to get a coefficient of about 150 deaths per GW-yr (Clean Air 2000). This, if true, dwarfs all other health problems of fuel use. A general model that might describe the effect is that the air pollution reduces lung function in an irreversible way. Lung function falls with age, and in the presence of the pollution could fall to a dangerous level, at which all sorts of ailments occur, at an earlier age than otherwise. This is shown diagrammatically in figure 3. It is easy to see geometrically that the calculated “loss of life expectancy” is directly proportional to the assumed lung damage, and the death rate is proportional also. Various studies show that the average reduction in lung function is related to air pollution variables, although there is large individual variation. This is a delayed effect that is not easy to alter after the initial lung damage; but in this it is similar to the delayed cancer mortality after a nuclear power plant accident. The magnitude can be summarized by saying that air pollution, mostly from coal burning but somewhat from oil burning also, causes more effects on public health than would a Chernobyl-size nuclear accident every year. A group called Clean Air (2000) recently calculated the PM2.5 contributions from power plants based on reported emissions, in general agreement with this.

Figure 3. Schematic of lung function vs age showing loss of life expectancy (LOLE). (Source: Wilson and Spengler, 1996.)

But there remains a huge uncertainty that is related to item (iii) above.
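The arithmetic behind the coefficient quoted above is simple enough to set down explicitly; the figures are the ones in the text (70,000 early deaths per year, an assumed 40% coal share, about 200 GW-yr of coal generation), and the quoted "150 deaths per GW-yr" is this result rounded.

```python
# Deaths-per-GW-yr coefficient from the figures quoted in the text.

total_early_deaths = 70_000       # early deaths per year, continental USA
coal_share = 0.40                 # assumed fraction from coal-fired generation
coal_generation_gw_yr = 200       # approximate coal-fired electricity output

coal_deaths = total_early_deaths * coal_share           # ~28,000 per year
deaths_per_gw_yr = coal_deaths / coal_generation_gw_yr  # ~140, quoted as ~150
```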
Although it is likely that air pollution from fossil fuel burning causes adverse effects on health, and likely that there is low dose linearity, it is far less sure what aspect of the air pollution is the problem. I believe that fine particles are involved. But exactly what aspect? Amdur (1989) showed that sulfate particulates are worse than sulphur dioxide for guinea pigs. But sulphur dioxide, emitted as a gas from power plants and evading all the particulate traps, converts to sulphates in the power plant plume, as demonstrated unequivocally by measurements from a TVA power plant (McMurry and Zhang 1989). Many countries have therefore been active in reducing sulphur dioxide emissions. But it could also be the vanadium or zinc that often attaches to the particles. A crucial question is whether nitrates are as important (or as good a surrogate) as sulfates, because nitrate precursors are emitted from all automobiles and power plants. How can one find out? An ideal, but very expensive, epidemiological study would involve following more people (perhaps 100,000) in a prospective study, with each person carrying a personal monitor continuously, instead of using the external, area, concentrations. But it is unlikely that this uncertainty will be resolved by epidemiology alone. It will need a careful combination of laboratory, animal and epidemiology experiments to elucidate the probable causes. Meanwhile it behooves us to be careful how we burn fossil fuels. We can, if we wish to spend the money, do a lot about particulate air pollution. At a power plant we can burn natural gas. Natural gas burns quite cleanly. It tends to be free of sulphur and of heavy metals like vanadium and zinc. The temperature can be well controlled so that fewer nitrogen oxides are produced. If we wish to burn coal we can first gasify it and remove these undesirable pollutants.
Once we go beyond the simple exhortation, “avoid burning fossil fuels”, or trapping all the particles or particle precursors, the recommendations for public health are far less secure. There is, by now, enough sulphur control that the fine particles are mostly nitrates. If indeed they are as bad as the others, we must be extraordinarily careful about automobile and truck exhausts. Motor vehicles could also be controlled. Catalysts already do a great deal, and we can demand that SUVs and commercial vehicles are similarly equipped. Hybrid gasoline/electric engines enable the fuel to be burned more cleanly, and when fuel cells are used the particulates are more easily removed. Electric engines in automobiles would put all the pollution at the power plant. BUT the overall simplicity goes down (causing extra expense) and the efficiency may go down, increasing the production of CO2. And, of course, we can always replace the fossil fuels completely by some other fuel.

HAVE MAN’S ACTIVITIES CAUSED GLOBAL WARMING?

There is a long history of scientific study of global warming. Fourier (1835) may have been the first to notice that the earth is a greenhouse, kept warm by the atmosphere, which reduces the loss of infrared radiation. Without a greenhouse the temperature of the earth would be simply calculated by a balance between the absorption of energy from the sun and black body infrared radiation. It would be about 200 degrees Kelvin, or 245 degrees if some allowance is made for reduced absorption and emissivity. This is too low to sustain life as we know it. With a greenhouse layer that absorbs the infrared radiation coming up from the surface and reemits it both upward and downward, the temperature goes up by a factor equal to the fourth root of 2. Although this picture is still the basis, all sorts of complications set in, as illustrated by figure 4, showing the estimates of global energy flows made by the Intergovernmental Panel on Climate Change (IPCC) in 2000. Figure 4.
The Earth’s annual and global mean energy balance. Of the incoming solar radiation, 49% (168 Wm-2) is absorbed by the surface. That heat is returned to the atmosphere as sensible heat, as evapotranspiration (latent heat) and as thermal infrared radiation. Most of this radiation is absorbed by the atmosphere, which in turn emits radiation both up and down. The radiation lost to space comes from cloud tops and atmospheric regions much colder than the surface. This causes a greenhouse effect. (Source: IPCC, 2001.)

Even in the 1800s carbon dioxide and water vapor were known to be major “greenhouse gases”, since they absorb infrared radiation. They have since been joined by methane, nitrous oxide, and fluorocarbons. It was also realized that the earth is a leaky greenhouse: not all the infrared radiation is absorbed by the CO2. As the CO2 concentration increases, the greenhouse becomes better and the temperature will rise. Arrhenius (1896) was the first to quantitatively relate the concentration of carbon dioxide (CO2) in the atmosphere to the global temperature and its changes over the ages. I was first made aware of the problem in the fall of 1947 by the lectures by Dobson to the undergraduate physicists at Oxford. Scientific understanding has increased since then, particularly stimulated in the latter half of the century by the conclusion of Revelle and Suess (1957) that human emissions of CO2 exceed the rate of uptake by natural sinks in the near term, and that CO2 mixes readily only in the shallow oceans, mixing with the deep oceans with a 500 year time constant. Anthropogenic increases in the concentration of CO2 therefore exceed the natural fluctuation. The measurements of Keeling (1989) (Figure 5a) showed dramatically that atmospheric CO2 is steadily increasing and that the seasonal fluctuations were being exceeded. The fluctuations in CO2 over the millennia were demonstrated clearly by the VOSTOK ice cores taken in the Antarctic (Figure 5b). Figure 5.
Variations in atmospheric CO2 concentration on different time scales. (a) Direct measurements of atmospheric CO2, and (b) CO2 concentration in the Vostok Antarctic ice core. (Source: IPCC, 2001.)

But the expected effect on temperature was slow to manifest itself. In figure 6, I show the surface temperature record for the last century and a half. There was an increase of 0.5 degrees in the first half of the 20th century, but it occurred before the big rise in CO2 concentrations. I note that when I attended Professor Dobson’s lectures in 1947 the rise was still in progress, and there was then a slight drop until 1980. That led many people, including some scientists, to doubt the scientists' warnings (Seitz 1984). For this and other reasons, the warnings of global warming had little effect on public opinion and policy until the summer of 1988, when it was noted that five out of the previous six summers in the United States were among the hottest on record, and a long-term global temperature record was presented to the U.S. Congress suggesting that a global mean warming had emerged above the background natural variation (Hansen 1981). Although Hansen’s specific presentation was based on erroneous data, the more recent rise in temperature shown in figure 6 tends to support his view. There have been criticisms that these data are inconsistent with satellite observations, and therefore unbelievable (Singer 2000). But a special panel of the National Academy of Sciences (NAS 2000) notes that “the warming trend (of the surface) during the last 20 years is undoubtedly real and is substantially greater than the average rate of warming during the 20th century”. The warming has been largely at high latitudes and in the center of continents, and is less in the troposphere: facts that were predicted by the global climate computer models. Figure 6. Combined annual land-surface air and sea surface temperature anomalies (°C) 1861 to 1999, relative to 1961 to 1990.
Two standard error uncertainties are shown as bars on the annual numbers. (Source: IPCC, 2001.)

The core of the scientific debate on global warming is about the temperature rise resulting from human activity, and in particular from an increase in concentration of greenhouse gases. How well determined are the various parameters? To address this question I go back to a simple equation relating an outcome “h” (which might be the height of the ocean) to the various parameters. This equation has 6 factors (Shlyakhter et al. 1995). The first factor is the world population; the second factor is energy production per capita; the third factor is the CO2 emitted per unit of energy production, leading to total CO2 emissions; the fourth factor is the increase of atmospheric concentration of CO2 per unit emission; the fifth factor is the temperature rise per unit of concentration; the sixth is the environmental outcome per unit temperature rise. Multiplying these factors together leads to an estimate of the final outcome. All calculations of global warming that we have seen follow this layout and formula to some extent, although some ask a more limited question and therefore only follow a part of the procedure. Although there has been a lot of advertisement about “Integrated Analyses of Climate Change”, these have in most cases consisted of running computer programs separately for each of the factors in the equation and ignoring correlations between them. The first factor, the projection of the world’s population, is already uncertain. The population has been doubling at an increasing rate since 1600. Malthus and others argued that it would go on increasing until war and famine set in. This accelerating doubling suggested that there might be a disaster around the year 2000, with doublings occurring very fast! I am glad to report that this has not happened. Demographers point out that there is a demographic transition: as a society becomes prosperous, the birth rate goes down.
The net reproduction rate has fallen below 1 in all developed societies and there are signs that the same is happening in developing societies. Stirling Colgate of Los Alamos does not believe this optimistic prediction. He argues that there will always be fringe societies that act differently, and predicts a population explosion about 2070. The second factor is a major factor in contention. In the USA we use 10 kilowatts per person continuously, in Europe about half this, and in India the amount is 300 watts per person, about a person’s internal heat generation rate. We all hope that developing countries will gradually become more prosperous and be able to afford more fuel, but few people anticipate this factor will rise to more than two or three kilowatts, averaged over the world’s population, by 2100. Even fewer predict that the USA will do what all the rest of the world is encouraging it to do: reduce the energy use per person from 10 kilowatts to 3 kilowatts. The third, crucial, factor depends almost entirely upon the extent to which society will continue to obtain its energy from burning carbon. It is hard to criticize a hope that mankind will know how to interrupt the flow of energy from the sun and put it to good use. But there remain two contrasting views here. According to the first, mankind must use nuclear energy for the foreseeable future if the steady improvement in welfare of the human race is to be maintained. According to the second view, which I do not share, we already know enough to use solar energy efficiently and at an affordable price. The fourth factor depends upon the fate of CO2 in the environment. Only about half of the CO2 emitted anthropogenically ends up in the atmosphere; the other half is absorbed in the environment. The factor is about 0.5 and cannot be greater than 1. It is probably known to an accuracy of 30%, better than any other factor in the equation.
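The six-factor layout described above is, mechanically, nothing more than a product of terms. Here is a minimal sketch of that mechanics; every numeric value is a placeholder I have invented to show the bookkeeping, not a forecast, and the units are only schematic.

```python
# Sketch of the six-factor chain (after Shlyakhter et al. 1995): the outcome
# "h" is the product of the factors. All values below are invented placeholders.

def chained_outcome(factors):
    """Multiply the factors of the chain together to get the outcome."""
    result = 1.0
    for f in factors:
        result *= f
    return result

factors = [
    1.0e10,   # factor 1: world population (assumed)
    2.0e3,    # factor 2: energy use per capita, watts (assumed)
    1.0e-1,   # factor 3: CO2 emitted per unit energy (assumed units)
    1.0e-13,  # factor 4: concentration rise per unit emission (assumed)
    1.0e-2,   # factor 5: temperature rise per unit concentration (assumed)
    0.5,      # factor 6: outcome (e.g. metres of sea level) per degree (assumed)
]
h = chained_outcome(factors)
```

The sketch also makes the text's criticism of "integrated analyses" easy to state: computing each factor with a separate program and multiplying the results is exactly this loop, and it silently ignores any correlations between the factors.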
Factor 5 is the main output of General Circulation Models (GCMs), and it is here that the main scientific controversy lies. I argue that it has an uncertainty of a factor of three. We obviously know well the infrared absorption spectra of the various greenhouse gases. If they are uniformly distributed in the atmosphere, and the concentrations are not correlated, then we can “simply” calculate the effect on the greenhouse and hence calculate a temperature rise. Indeed all the greenhouse gases except one, water vapor, distribute almost uniformly around the globe. Although the absorption spectra overlap a little, this is a secondary effect. The IPCC have defined a peculiar concept called “radiative forcing” due to the various gases. This is the effect of each gas separately, assuming an existing temperature and concentration distribution. It can be well estimated for CO2, CH4 and the fluorocarbons. Figure 7 shows the relative magnitude of the forcings, and the extent to which they are scientifically understood. If there were no change in the concentration of water vapor (such as would be the case if the Earth were dry), the global-mean surface temperature would increase by ΔTd = 1.2°C for a static doubling of CO2, and this estimate is quite reliable. Simple calculations of the concentrations of these greenhouse gases (except water vapor) from known emissions are, for the most part, well understood. If temperature rise does not affect the concentrations, as is approximately the case for these gases, then the calculation of temperature rise would also be well understood. Also well understood is the way the radiative forcing effect of CO2 depends upon concentration. The absorption is complete at the main absorption lines, and as concentration increases it is only at the edges of these lines that absorption increases, so the forcing grows only slowly (roughly logarithmically) with concentration.
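This saturation behaviour of the CO2 forcing is often summarized by a simplified logarithmic fit; the coefficient below is the IPCC's commonly quoted value, which I use here as an assumed shorthand for the line-edge argument in the text rather than as its derivation.

```python
import math

# Simplified logarithmic fit for CO2 radiative forcing (IPCC shorthand):
# delta_F = 5.35 * ln(C/C0) W/m^2, with C0 the pre-industrial concentration.

def co2_forcing_wm2(c_ppm, c0_ppm=280.0):
    """Radiative forcing of CO2 relative to the pre-industrial level."""
    return 5.35 * math.log(c_ppm / c0_ppm)

forcing_at_doubling = co2_forcing_wm2(560.0)   # about 3.7 W/m^2
```

Note the key property: each successive doubling adds the same forcing, which is why emissions growth does not translate one-for-one into forcing growth.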
This then leads to a reliable estimate that the global-mean surface temperature would increase by ΔTd = 1.2°C for a static doubling of CO2 concentrations, that is, an increase in CO2 concentrations that is maintained at a constant level over a long period of time. This is sometimes called an "equilibrium response" to a static, or quasi-static, doubling of CO2.

Figure 7. Global, annual-mean radiative forcings (Wm-2) due to a number of agents for the period from pre-industrial (1750) to present (late 1990s: ~2000). (Source: IPCC, 2001.)

More important, as anyone can see by looking upwards at the world's clouds, the concentration of the most important greenhouse gas, namely water vapor, varies rapidly over space and time. If temperature rises slightly because of increased CO2 concentrations, it is generally assumed that there will be an increase in evaporation of surface water, an increase in water vapor concentrations, and hence an amplification of the temperature rise. Numerous interactive feedbacks from water (most importantly, water vapor, snow-ice albedo, and clouds) introduce considerable uncertainties into the estimates of the mean surface temperature rise, ΔTs. The value of ΔTs is roughly related to ΔTd by the formula ΔTs = ΔTd/(1 − f), where f denotes a sum of all feedbacks. The simple argument above suggests that there is a positive feedback. On the other hand, cloud feedback is the difference between the warming caused by the reduced emission of infrared radiation from the Earth into outer space, and the cooling through reduced absorption of solar radiation. The net effect is determined by cloud amount, altitude, and cloud water content. As a result, the values of ΔTs from different models vary from ΔTs = 1.9°C to ΔTs = 5.2°C (Cubasch and Cess 1990). Typical values for these parameters are ΔTd = 1.2°C and f = 0.7, so that ΔTs = 4°C. It is important to note that some feedbacks of water vapor may not yet have been identified.
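The feedback relation quoted above, ΔTs = ΔTd/(1 − f), is worth writing out, because it shows at a glance both the typical value and why small changes in f matter so much.

```python
# Feedback amplification of the no-feedback doubling response:
# delta_T_s = delta_T_d / (1 - f), where f sums the water-vapour, albedo
# and cloud feedbacks. With the text's typical values, 1.2 / (1 - 0.7) = 4.

def amplified_warming(delta_t_d, f):
    """Equilibrium surface warming after feedbacks; diverges as f -> 1."""
    if f >= 1.0:
        raise ValueError("f >= 1 corresponds to runaway warming")
    return delta_t_d / (1.0 - f)

typical = amplified_warming(1.2, 0.7)   # about 4.0 degrees C
```

The divergence as f approaches 1 is the formula's version of the runaway-warming concern: near f = 0.7, a small increase in f produces a disproportionately large increase in ΔTs.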
In addition, the lag of the temperature rise is large enough that, with the present rate of increase of CO2, at the time that CO2 doubling is reached only a 0.5°C to 1.0°C temperature rise is expected, not the equilibrium temperature rise of 2.5°C (Cubasch 1992, 1993). This range is shown in the next two figures. Figure 8 shows a range of predictions in the recent IPCC (2001) report, suggesting that by the end of 2100 there is likely to be a global temperature increase of 5 degrees, but it could be as high as 11 degrees. Figure 9, from a more optimistic view by Patrick Michaels and others (Michaels et al. 2000), still shows an increase of over 1 degree by the year 2100. More importantly, all predictions show the rise continuing. Indeed the models all have within them a time delay. In addition, most experts believe that even small increases in the value of f could result in a runaway warming not estimated by any of the models, leading, ultimately, to a different stable (or quasi-stable) state of the Earth's climate (Stone, 1993).

Figure 8. Simple model results: estimated historical anthropogenic radiative forcing up to 2000 followed by radiative forcing for the six illustrative SRES scenarios. The shading shows the envelope of forcing that encompasses the full set of thirty-five SRES scenarios. (Source: IPCC, 2001.)

Figure 9. Observed and predicted warming. Note: Observed warming of the last three decades, when superimposed on typical climate model projections, shows a linear trend near the lowest value that the climate models predict and considerably below the mean projected warming. (Source: P. Michaels, 2000.)

Factor 6 is the effect of the temperature rise on the particular societal parameter of interest. This factor is the most uncertain of the lot, because it depends upon the most uncertain phenomenon of them all: human behavior. It should be evident that the equation should branch just before factor 6, to allow for different possible outcomes.
Alternatively, several equations may be discussed, and the overall outcomes related to each other (perhaps by a cost per unit outcome) and summed. The crucial issue that faces us in discussion of this factor is man’s extraordinary ability to adapt. As global interactions increase, so does the speed of this adaptation. What is the limit? Twenty years ago there were loose predictions of a 1 to 10 meter rise in sea level due to global warming as the polar ice caps melt. These have now moderated, and predictions are less than 1 meter. But in many locations of the world the local rise, or fall, in sea level due to “natural variations” is more than this. The Dutch have shown us, over the centuries, how to cope with a sea level that is higher than the level of the land; should we be worried?

INTERNATIONAL TREATIES

We must continuously remember that any attempt to reduce concentrations of carbon dioxide, and hence global warming, is limited by the fundamental fact that carbon dioxide and other greenhouse gases (other than water vapor) spread widely. My fossil fuel burning affects in some degree every person on the planet. More than any other pollution problem, this fact demands that decisions be taken collectively on what action to take. No mechanism proposed so far for such a collective decision of the human race seems satisfactory. At one extreme one could set a limit on the amount of CO2 per capita that each person is permitted to emit; at the other one could set as a baseline the amount emitted in recent years, with proposed reductions below this amount. The Chinese emit much less CO2 per capita than the world average and so tend to prefer the first approach. The USA clearly emits more CO2 per capita than other countries and so tends toward the second, although in the most recent years Congress has not accepted any approach. The Buenos Aires and Kyoto agreements were attempts to establish a baseline based upon the second approach.
President George Bush had a good, and trusted, science advisor in Allan Bromley. Allan made sure that no commitment was made that could not be met. But the commitment to keep emissions of greenhouse gases in the year 2000 to no more than in 1990 was met not by a restriction on CO2 but by the already agreed (in Montreal) ban on emissions of fluorocarbons. In Kyoto the Clinton administration went much further, without any clear plan. It is important to realize that in any ordinary reading of the Kyoto agreement, the USA would have to reduce emissions to 8% or so below 1990 levels by 2008-2012. This is 18% below today's levels. If we allow nuclear power to be abandoned, that is another 5%, for a total of 23%, which is hard to achieve in ten years. The USA is engaged in unseemly bargaining. We are fortunate in having a large landmass which was covered in forests, although some 60% was clear-cut by 1860. The regrowth of these forests involves much more uptake of CO2 than was realized even 15 years ago, and is expected to account for 8%. But this is an unseemly legal strategy that tends to anger the rest of the world. We could, however, set in place, if we wish, a national program to expand nuclear power 4-fold. This could not be done by 2012, but a plan could be in place to do so within 20 years, so that we would have a good story to tell the rest of the world of a good-faith attempt to meet the Kyoto agreement. Vice-President Gore had no such plan. Let us hope that President Bush does.
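The arithmetic behind the 18% and 23% figures can be checked in a few lines. The ~12% growth of US emissions over 1990 used below is an assumption, chosen only because it is consistent with the text's statement that 8% below 1990 is about 18% below today's levels.

```python
def kyoto_gap(growth_since_1990, kyoto_cut=0.08, nuclear_share=0.05):
    """Fraction of TODAY's emissions that must be cut to reach the Kyoto
    target (kyoto_cut below the 1990 level), and the larger cut needed if
    nuclear power's emission-free share is abandoned as well."""
    target = 1.0 - kyoto_cut          # target, as a multiple of 1990 emissions
    today = 1.0 + growth_since_1990   # today's level, as a multiple of 1990
    cut_from_today = 1.0 - target / today
    return cut_from_today, cut_from_today + nuclear_share

# Assuming emissions grew ~12% over 1990 (an illustrative assumption):
cut, cut_with_nuclear_loss = kyoto_gap(0.12)
print(round(cut, 3), round(cut_with_nuclear_loss, 3))  # → 0.179 0.229
```

The two results reproduce, to rounding, the "18% below today's levels" and "a total of 23%" figures in the text.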
Publication date: 2001